Machine Vision | 523 Article(s)
Research and Development of Chlorophyll Fluorescence Meter with Broadband Excitation Function
Qian Xia, Hao Tang, Wei Ge, Lijiang Fu, Dezhi Tong, and Ya Guo
Chlorophyll fluorescence detection is widely used as a nondestructive measurement technique. However, the step pulses or pulse-amplitude-modulated (PAM) pulses used in traditional chlorophyll fluorescence detection to excite signals occupy a narrow frequency band, whereas a photosynthetic system is a high-order broadband system, so such excitation cannot capture all of its dynamic characteristics. This limits the information richness of chlorophyll fluorescence signals. Currently available commercial chlorophyll fluorescence meters do not offer broadband excitation, and the absence of this function limits the ability of emerging artificial-intelligence algorithms to mine rich information from complex signals. Thus, in this study, we developed a chlorophyll fluorescence meter with a broadband excitation function based on pseudorandom binary sequence (PRBS) signals. The developed instrument can also measure traditional chlorophyll fluorescence induction (OJIP) and PAM kinetics. Measurements of the information entropy of chlorophyll fluorescence from five plant species under three different light sources confirmed that the fluorescence excited by PRBS has the highest information entropy. The instrument thus provides chlorophyll fluorescence signals carrying more information and is expected to serve as a new scientific instrument for detecting plant physiology and environmental stress.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0815002 (2024)
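The two ingredients of this approach, PRBS excitation and information entropy comparison, can be sketched in a few lines. This is a minimal illustration, not the instrument's firmware: the PRBS-7 polynomial (x⁷+x⁶+1) and the histogram bin count are assumptions.

```python
import numpy as np

def prbs7(length, seed=0x7F):
    """Pseudorandom binary sequence from a 7-bit LFSR with polynomial
    x^7 + x^6 + 1 (a common maximal-length PRBS variant)."""
    state = seed & 0x7F
    out = []
    for _ in range(length):
        bit = ((state >> 6) ^ (state >> 5)) & 1  # feedback from taps 7 and 6
        state = ((state << 1) | bit) & 0x7F
        out.append(bit)
    return np.array(out)

def shannon_entropy(signal, n_bins=32):
    """Shannon information entropy (bits) of a signal's amplitude histogram."""
    hist, _ = np.histogram(signal, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

seq = prbs7(127)  # one full period of the maximal-length sequence
print(shannon_entropy(seq, n_bins=2))
```

A full PRBS-7 period contains 64 ones and 63 zeros, so its binary entropy is close to the 1-bit maximum; the same entropy measure, applied to recorded fluorescence traces, is how the excitation schemes were compared.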
Dense Feature Matching Based on Improved DFM Algorithm
Yanhan Zhang, Yinxin Zhang, Zhanhua Huang, and Kangnian Wang
Image matching, which transforms an image to be matched into the coordinate system of a reference image, plays an important role in numerous visual tasks. Feature-based image matching, which finds distinctive features in the image, is widely adopted because of its applicability, robustness, and high accuracy. To improve feature matching performance, it is important to obtain more feature matches with high matching accuracy. To address the sparse matching problem of traditional feature matching algorithms, we propose a dense feature matching method based on an improved deep feature matching (DFM) algorithm. First, a series of feature maps is extracted from the image with a VGG neural network, and nearest-neighbor matching is performed on the initial feature map to estimate a homography matrix and apply a perspective transformation. Then, deep features are fused according to the frequency-domain matching characteristics of the feature maps for coarse feature matching. Finally, fine feature matching is performed on the shallow feature map to correct the coarse matching results. Experimental results indicate that the proposed algorithm outperforms other methods, obtaining a larger number of matches with higher matching accuracy.
Laser & Optoelectronics Progress
  • Publication Date: Apr. 25, 2024
  • Vol. 61, Issue 8, 0815001 (2024)
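The nearest-neighbor matching step used on the deepest feature map can be sketched as mutual nearest-neighbor matching between two descriptor sets. This is a generic illustration with synthetic descriptors, not the paper's VGG pipeline; descriptor shapes and the noise level are assumptions.

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Mutual nearest-neighbor matching between two descriptor sets (rows),
    as commonly used for the coarse matching stage before homography fitting."""
    # Pairwise Euclidean distances between all descriptor pairs.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)  # best match in B for each descriptor in A
    nn_ba = d.argmin(axis=0)  # best match in A for each descriptor in B
    # Keep only pairs that agree in both directions.
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(8, 16))
perm = rng.permutation(8)
desc_a = desc_b[perm] + 0.01 * rng.normal(size=(8, 16))  # noisy, shuffled copy
matches = mutual_nn_matches(desc_a, desc_b)
print(matches)
```

In the full algorithm, such matches on the initial feature map would feed a homography estimate (e.g., with RANSAC) to warp one image before the coarse-to-fine stages.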
Accurate and Fast Primitive Detection Method for 3D Point Cloud Data
Min Shi, Shaoqing Zhou, Suqing Wang, and Dengming Zhu
Current detection methods for three-dimensional (3D) point cloud data readily misidentify local areas of low-curvature cylindrical surfaces as planes, and they can quickly and accurately identify only a single type of primitive. We propose a fast primitive detection method for point cloud data that detects both planar and cylindrical surfaces simultaneously, quickly and accurately. The method comprises two stages: coarse recognition and refinement. First, the point cloud is divided into small-grained patches, patch characteristics are computed, and planar and cylindrical patches are roughly identified. Next, planar patches adjacent to cylindrical patches are filtered according to the filter conditions, and patches with identical characteristics are merged to obtain complete planar and cylindrical surfaces. Our experiments on data from five mechanical components show that the proposed method outperforms two popular recognition methods. Moreover, it does not exhibit the omission and misidentification errors shown by the other two methods, and it is more accurate in surface parameter estimation and segmentation when multiple cylindrical surfaces are connected.
Laser & Optoelectronics Progress
  • Publication Date: Feb. 25, 2024
  • Vol. 61, Issue 4, 0415006 (2024)
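A common patch characteristic for separating planar from curved patches, and the reason low-curvature cylinders are easily mistaken for planes, is the surface-variation ratio of the patch covariance eigenvalues. The sketch below is a generic illustration of this idea, not the paper's specific feature set; thresholds and patch sizes are assumptions.

```python
import numpy as np

def patch_curvature(points):
    """Surface-variation measure of a point patch: lambda_min / sum(lambdas)
    of the covariance eigenvalues. Near zero for planes; larger on curved patches."""
    c = points - points.mean(axis=0)
    lam = np.linalg.eigvalsh(c.T @ c / len(points))  # ascending eigenvalues
    return float(lam[0] / lam.sum())

rng = np.random.default_rng(1)
# Planar patch: z ~ 0 plus tiny sensor noise.
plane = np.c_[rng.uniform(-1, 1, (200, 2)), 1e-4 * rng.normal(size=200)]
# Patch on a radius-1 cylinder: an arc in xz swept along y.
t = rng.uniform(-0.5, 0.5, 200)
cyl = np.c_[np.cos(t), rng.uniform(-1, 1, 200), np.sin(t)]
print(patch_curvature(plane), patch_curvature(cyl))
```

Note how small the cylinder's value already is for this shallow arc: with noise and a lower curvature it approaches the planar value, which is why a refinement stage that re-examines planar patches adjacent to cylindrical ones is needed.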
Measurement of Spherical Particle Pose by Monocular Vision
Jiangjie Li, Ming Kong, Lu Liu, Yunkun Zhao, and Jiangnan Chen
To track the spatial pose of target particles in a fluidized bed, a pose measurement system based on monocular vision and color-texture-coded spheres is developed. For spatial positioning, a spatial sphere imaging model is established, and the principle of monocular position measurement is analyzed on the basis of the pinhole imaging model, the camera coordinate transformation model, and the theory of spatial analytic geometry. Because the spatial attitude of a spherical particle cannot be determined from its shape alone, texture features are introduced: the attitude is measured by extracting the texture of the target particle and matching it by similarity against known-orientation images in a synthetic library. Based on this analysis, an experimental system is built and a series of experiments is carried out. The results show that the comprehensive error rate of position measurement is no more than 0.5% and the attitude measurement error is no more than 2°, verifying the effectiveness and feasibility of the proposed model.
Laser & Optoelectronics Progress
  • Publication Date: Feb. 25, 2024
  • Vol. 61, Issue 4, 0415005 (2024)
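The position half of the problem can be sketched with a simplified pinhole relation: a sphere of known radius R imaged as a circle of radius r pixels at focal length f pixels sits at depth roughly Z = fR/r, and the image center ray gives X and Y. This is a small-sphere approximation for illustration only; the paper's full model uses spatial analytic geometry (the true projection of a sphere is generally an ellipse), and all camera parameters below are made up.

```python
import numpy as np

def sphere_position(f_px, cx, cy, u, v, r_px, R):
    """Approximate 3D center of a sphere of known radius R from its image:
    circle center (u, v) and radius r_px in pixels, pinhole camera model.
    Small-sphere approximation: Z ~ f * R / r_px."""
    Z = f_px * R / r_px
    X = (u - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.array([X, Y, Z])

# Synthetic round trip: project a known sphere, then recover its center.
f_px, cx, cy, R = 1000.0, 640.0, 360.0, 0.02  # hypothetical camera, 20 mm sphere
true = np.array([0.05, -0.03, 0.8])
u = cx + f_px * true[0] / true[2]
v = cy + f_px * true[1] / true[2]
r_px = f_px * R / true[2]
print(sphere_position(f_px, cx, cy, u, v, r_px, R))
```

The attitude half has no such closed form, which is why the system falls back on matching the sphere's texture against a synthetic library of known orientations.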
3D Reconstruction with Neural Radiance Fields Based on an Improved Multilayer Perceptron
Yaofei Hou, Haisong Huang, Qingsong Fan, Jing Xiao, and Zhenggong Han
Neural radiance fields (NeRF) exhibit excellent performance in implicit 3D reconstruction compared with traditional 3D reconstruction methods. However, the simple multilayer perceptron (MLP) model lacks local information in the sampling process, resulting in blurry reconstructed scenes. To solve this issue, a multifeature joint learning (MFJL) method based on the MLP is proposed in this study. First, an MFJL module is constructed between the embedding and sampling layers of NeRF to effectively decode the multiview encoded input and supplement the local information missing from the MLP model. Then, a gated channel transformation MLP (GCT-MLP) module is built between the sampling and inference layers of NeRF to learn interactions between higher-order features and control the information flow fed back to the MLP layer when selecting ambiguous features. The experimental results show that NeRF with the improved MLP avoids blurred views and aliasing in 3D reconstruction. The average peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and learned perceptual image patch similarity (LPIPS) values are 28.08 dB, 0.887, and 0.061 on the Real Forward-Facing dataset; 32.75 dB, 0.960, and 0.026 on the Realistic Synthetic 360° dataset; and 25.96 dB, 0.807, and 0.208 on the DTU dataset, respectively. Overall, compared with NeRF, the proposed method reconstructs views better, yielding clearer images and more detailed textures in subjective visual comparisons.
Laser & Optoelectronics Progress
  • Publication Date: Feb. 25, 2024
  • Vol. 61, Issue 4, 0415004 (2024)
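The abstract does not spell out the GCT-MLP internals, but the gated channel transformation it builds on follows a general pattern: embed per-channel energy, normalize it across channels, and use the result to gate the features. The numpy sketch below shows that pattern on (batch, channel) features; the per-channel parameters alpha, gamma, and beta and the 1-D formulation are assumptions, not the paper's module.

```python
import numpy as np

def gct(x, alpha, gamma, beta, eps=1e-5):
    """Simplified gated channel transformation on features x of shape
    (batch, channels). alpha scales the channel embedding, gamma/beta
    shape the gate. With gamma = beta = 0 the gate is identity."""
    embedding = alpha * np.abs(x)  # per-feature energy embedding
    # Normalize the embedding across the channel dimension.
    norm = embedding / np.sqrt((embedding ** 2).mean(axis=1, keepdims=True) + eps)
    gate = 1.0 + np.tanh(gamma * norm + beta)  # channel gate in (0, 2)
    return x * gate

x = np.random.default_rng(2).normal(size=(4, 8))
c = x.shape[1]
out = gct(x, alpha=np.ones(c), gamma=np.zeros(c), beta=np.zeros(c))
print(np.allclose(out, x))  # identity when gamma = beta = 0
```

The identity-at-initialization property (gate = 1 when gamma and beta are zero) is what makes such gating safe to insert between existing NeRF layers.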
Three-Dimensional Object Detection Based on Multistage Information Enhancement in Point Clouds
Shanshuai Yuan, and Lei Ding
Voxel-based methods, which offer low computational complexity and low latency, are commonly used for three-dimensional (3D) object detection from point clouds in autonomous driving. However, current industrial algorithms suffer information loss at two stages: voxelization discards point cloud information, and the algorithms do not fully utilize the information that remains after voxelization. This study therefore designs a three-stage network to address this information loss. In the first stage, a strong voxel-based algorithm outputs proposal bounding boxes. In the second stage, feature-map information associated with each proposal is used to refine its bounding box, addressing the insufficient use of post-voxelization information. The third stage uses the precise locations of the original points, compensating for the information lost in voxelization. On the Waymo Open Dataset, the detection accuracy of the proposed multistage 3D object detection method is better than that of CenterPoint and other strong algorithms favored by the industry, while meeting the latency requirements of autonomous driving.
Laser & Optoelectronics Progress
  • Publication Date: Feb. 25, 2024
  • Vol. 61, Issue 4, 0415003 (2024)
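The voxelization loss that the third stage compensates for is easy to make concrete: quantizing each point to its voxel center discards the within-voxel offset, bounded by half the voxel diagonal. A minimal numpy sketch, with the voxel size and point distribution chosen arbitrarily for illustration:

```python
import numpy as np

def voxelize(points, voxel=0.2):
    """Quantize points to voxel centers; the per-point offsets from those
    centers are exactly the information discarded by voxelization."""
    idx = np.floor(points / voxel).astype(int)
    keys, inv = np.unique(idx, axis=0, return_inverse=True)
    centers = (keys + 0.5) * voxel
    return centers, centers[inv.ravel()]  # unique voxel centers, per-point quantized coords

rng = np.random.default_rng(3)
pts = rng.uniform(0, 2, size=(500, 3))
centers, quantized = voxelize(pts, voxel=0.2)
loss = np.linalg.norm(pts - quantized, axis=1)  # displacement lost per point
print(len(centers), loss.max())
```

A second-stage refinement sees only features derived from `centers`; a third stage with access to `pts` can recover the `loss` term, which is the motivation for the design above.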
Operation Control System for Movable Robotic Arm Based on Mixed Reality
Zeyu Dong, Zheng Liu, Yong Li, and Likun Hu
To address the inconvenience of mobile field operations under special conditions, we designed a mobile robotic arm operation control system based on mixed reality. The system comprises three modules: human-computer interaction, mechanical drive, and virtual reality. The human-computer interaction module recognizes the operator's body gestures via camera and issues the corresponding operation instructions. These instructions are interpreted by the mechanical drive module, which reports the working status of the equipment. This feedback is received by the virtual reality module, which reproduces the equipment's operation in a virtual scene for real-time monitoring. Tests on a mobile robot platform show that the operator can achieve precise remote control and real-time monitoring of the mobile robotic arm through the proposed system, with a system response time of 60–100 ms/frame.
Laser & Optoelectronics Progress
  • Publication Date: Feb. 25, 2024
  • Vol. 61, Issue 4, 0415002 (2024)
Planar Point Cloud Registration and Relative Pose Control-Based Assembly Method of Industrial Parts
Boyan Wei, Hongzhi Du, Ying Zhang, Yongjie Ren, and Yanbiao Sun
To address the low registration success rate and insufficient pose control accuracy of traditional assembly methods on workpieces with planar, weakly geometric contours, an assembly method for industrial parts based on planar point cloud registration and relative pose control is proposed. First, the workpiece point cloud is reconstructed with a binocular structured-light sensor, and the result is registered using the RANSAC-ICP algorithm; the relationship between the plane normal vector feature and the maximum distance is then corrected to achieve accurate point cloud registration. A manipulator control method based on relative pose relationships is proposed, which omits calibration of the tool coordinate system and controls the manipulator motion directly by relative pose, taking the relative pose error as the evaluation criterion of control system error to realize reliable assembly. Finally, experiments are conducted with a large industrial manipulator in a real test scenario. The results show that the registration success rate of the proposed method is 85 percentage points higher than that of the traditional assembly method and that the automatic assembly accuracy is better than 0.5 mm, so the method can effectively solve the planar workpiece assembly problem.
Laser & Optoelectronics Progress
  • Publication Date: Feb. 25, 2024
  • Vol. 61, Issue 4, 0415001 (2024)
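The "relative pose" idea, commanding the manipulator by the transform between its current and target poses rather than through a calibrated tool frame, reduces to composing homogeneous transforms. A minimal sketch (the 4×4 pose representation and example values are illustrative, not the paper's controller):

```python
import numpy as np

def pose(R=np.eye(3), t=np.zeros(3)):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_current, T_target):
    """Relative transform that carries the end-effector from its current
    pose to the target pose, both expressed in a common base frame."""
    return np.linalg.inv(T_current) @ T_target

T_cur = pose(t=np.array([0.1, 0.0, 0.5]))
T_tgt = pose(t=np.array([0.1, 0.2, 0.5]))
T_rel = relative_pose(T_cur, T_tgt)
print(T_rel[:3, 3])  # commanded relative translation
```

Composing the current pose with `T_rel` recovers the target exactly, and the residual of that composition is the relative pose error used as the control system's evaluation criterion.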
Global Registration Method for Laser SLAM Point Clouds Based on Graph Optimization
Hao Tang, Dong Li, Cheng Wang, Sheng Nie, Jiayin Liu, and Ye Duan
To address the drift errors and inadequate precision of point clouds produced by laser-based simultaneous localization and mapping (SLAM) algorithms over long scanning trajectories, this study presents a graph optimization-based global point cloud registration approach for laser SLAM. For SLAM point clouds with drift errors, we construct initial and iterative pose graphs for cascaded optimization. The initial pose graph is built from point cloud similarity and the centroid distances of trajectory segments to reduce trajectory drift, yielding SLAM point clouds with smaller drift errors. Iterative pose graphs are then formed from the overlap between segment point clouds, and the point clouds are coarsely and finely adjusted in an iterative manner to produce higher-precision SLAM point clouds. Experiments were performed on one handheld and three vehicle-mounted laser SLAM datasets. After optimization, the point clouds of the four datasets overlapped well across their repeated scans: the distance root mean square error (RMSE) between matched keypoints was reduced from 2.667, 10.348, 19.018, and 3.412 m before optimization to 0.158, 0.211, 0.218, and 0.157 m, respectively. These results indicate that the proposed algorithm resolves the drift error accumulated during long-trajectory laser SLAM scanning, improving the accuracy of the point cloud data.
Laser & Optoelectronics Progress
  • Publication Date: May 25, 2024
  • Vol. 61, Issue 10, 1015003 (2024)
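The evaluation metric behind the reported numbers, distance RMSE between matched keypoints from repeated scans, is straightforward to compute; the sketch below shows it on made-up coordinates, not the paper's data.

```python
import numpy as np

def keypoint_rmse(pts_a, pts_b):
    """Distance RMSE between matched keypoint pairs (N x 3 arrays):
    sqrt of the mean squared pairwise distance."""
    d = np.linalg.norm(pts_a - pts_b, axis=1)
    return float(np.sqrt((d ** 2).mean()))

# Hypothetical matched keypoints from two passes over the same area.
a = np.zeros((4, 3))
b = np.array([[0.1, 0, 0], [0, 0.2, 0], [0, 0, 0.1], [0.2, 0, 0]])
print(keypoint_rmse(a, b))
```

Because the distances are squared before averaging, this metric penalizes the large local misalignments typical of drift more heavily than a mean distance would.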
Attention-Based Multi-Stage Network for Point Cloud Completion
Xiyang Yin, Pei Zhou, and Jiangping Zhu
Point cloud completion reconstructs a complete 3D model from incomplete point cloud data. Most existing completion methods are hampered by the disorder and irregularity of point clouds, which make local details difficult to reconstruct and thus degrade completion accuracy. To solve this problem, an attention-based multi-stage network for point cloud completion is proposed. A pyramid feature extractor satisfying permutation invariance is designed to model both the dependence between points within a local region and the correlations between different regions, enhancing local information extraction while still capturing global features. For point cloud reconstruction, a coarse-to-fine strategy is adopted: a low-resolution seed point cloud is generated first, and its local details are then progressively enriched to obtain a finer, denser point cloud. Comparisons on the public PCN dataset demonstrate that the proposed network effectively reconstructs local details and improves completion accuracy by at least 5.98% over existing methods. Ablation results further validate the effectiveness of the designed attention module.
Laser & Optoelectronics Progress
  • Publication Date: May 25, 2024
  • Vol. 61, Issue 10, 1015002 (2024)
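A standard way to obtain the kind of well-spread, low-resolution seed cloud used in coarse-to-fine completion is farthest point sampling. The abstract does not say how this network generates its seeds, so the sketch below is only a generic illustration of the seed-cloud idea:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Select k well-spread points from an (N, 3) cloud: start from a random
    point, then repeatedly add the point farthest from all chosen so far."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())  # farthest remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

pts = np.random.default_rng(4).uniform(-1, 1, size=(1000, 3))
seeds = farthest_point_sampling(pts, 16)
print(seeds.shape)
```

Unlike uniform random sampling, this keeps the seed cloud covering the whole shape, so the later refinement stages only have to densify locally rather than fill coverage gaps.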